{ "cells": [ { "cell_type": "markdown", "id": "096007d3-96ef-4c1a-abfa-0907a31dcef7", "metadata": {}, "source": [ "# Homework 4\n", "\n", "In this homework you will get experience with Q-learning applied to some classic domains from the early literature on reinforcement learning. You'll implement tabular Q-learning, in which the states and actions must be discrete. The underlying domains have discrete action spaces but continuous observation spaces. I've provided code that will convert continuous observations into discrete ones. In a later homework we'll use neural networks to solve these same problems without the need to discretize." ] }, { "cell_type": "markdown", "id": "838aaad0-a5ca-405b-9c2f-4d238f45a8b1", "metadata": {}, "source": [ "## Task 1: Set up your environment\n", "\n", "There is nothing to turn in for this task.\n", "\n", "You'll need to pip install the following packages:\n", "* gymnasium[classic-control] - a collection of RL domains\n", "* tqdm - a tool for monitoring the progress of loops that run a long time\n", "* numpy - a collection of useful tools for \"mathy\" things\n", "* matplotlib - a collection of plotting utilities" ] }, { "cell_type": "code", "execution_count": null, "id": "338bbbea-1ad4-4a5e-b660-46f57674d707", "metadata": {}, "outputs": [], "source": [ "import gymnasium as gym\n", "import numpy as np\n", "import random\n", "from tqdm import tqdm\n", "import matplotlib.pyplot as plt" ] }, { "cell_type": "markdown", "id": "8ceca2fd-f2bd-4840-9d95-8f2ecbca1841", "metadata": {}, "source": [ "## Task 2: Look at the gymnasium documentation\n", "\n", "There is nothing to turn in for this task.\n", "\n", "Gymnasium is a package that has a uniform interface to a variety of domains where RL can be used. If your code works for one of them, it will (in theory) work for all of them. The top-level documentation is here:\n", "\n", "https://gymnasium.farama.org/index.html\n", "\n", "We'll work with 3 domains. Read the documentation for each of them:\n", "* Mountain car - https://gymnasium.farama.org/environments/classic_control/mountain_car/\n", "* Acrobot - https://gymnasium.farama.org/environments/classic_control/acrobot/\n", "* Lunar lander - https://gymnasium.farama.org/environments/box2d/lunar_lander/\n" ] }, { "cell_type": "markdown", "id": "53008c54-b793-49b2-b1b4-6d6e79603a56", "metadata": {}, "source": [ "Each of the domains produces observations that are vectors of real values. For example, as the documentation for the Mountain Car domain says the state has two real values:\n", "* The position of the car on the x axis\n", "* The velocity of the car\n", "\n", "The class below converts real-valued vectors into discrete values. You will experiment with the impacts of using coarse or fine discretization. To turn a given observation that is a real-valued vector into a discrete state, the class below divides the range of each variable into a set of uniformly sized, non-overlapping bins.\n", "\n", "For example, for the Mountain Car the smallest and largest values of x are -1.2 and 0.6, respectively. If you select 5 bins, the size of each bin will be (0.6 - -1.2)/5 = 0.36. They span the following ranges, which get mapped to distinct integers as shown below:\n", "* [-1.2, -0.84) -> 0\n", "* [-0.84, -0.48) -> 1\n", "* [-0.48, -0.12) -> 2\n", "* [-0.12, 0.24) -> 3\n", "* [0.24, 0.6] -> 4\n", "\n", "Each element of an observation gets mapped like this, and the resulting string of digits becomes a key to map to the corresponding discrete state. 
Each time a new key is found (i.e., the system finds itself in a discrete state that it has never seen before), it is mapped to an integer that can be used to index into the Q-table.\n", "\n", "Note that when the number of bins is small, the system treats lots of underlying observations as the same state. When the number of bins is large, the system can make finer distinctions, but the Q-table gets large and you'll need more experience in the domain to learn about all of those states. You will explore that tradeoff below." ] },
{ "cell_type": "code", "execution_count": null, "id": "b0b3bdcc-1a2e-4420-995b-6f423c69e5fb", "metadata": {}, "outputs": [], "source": [ "# Let's look at an observation in the mountain car domain\n", "env = gym.make(\"MountainCar-v0\", render_mode=None)\n", "observation, info = env.reset()\n", "observation" ] },
{ "cell_type": "code", "execution_count": null, "id": "447e55ee-8984-46ac-8ec7-5706c52645b3", "metadata": {}, "outputs": [], "source": [ "class Discrete:\n", "    \n", "    def __init__(self, env, nbins):\n", "        \"\"\"\n", "        Arguments:\n", "          env - A Gymnasium environment that was created by a call to gym.make()\n", "          nbins - If this is an integer, then each element of an observation\n", "                  is mapped into nbins non-overlapping intervals whose size is\n", "                  (high - low) / nbins. If this is a list, then the list must be the\n", "                  same size as an observation and each element of the list specifies the\n", "                  number of bins for the corresponding element of an observation.\n", "                  This makes it possible to use different numbers of bins for\n", "                  different elements of an observation.\n", "        \"\"\"\n", "        \n", "        nobs = env.observation_space.shape[0]\n", "        if type(nbins) == int:\n", "            nbins = [nbins] * nobs\n", "        else:\n", "            assert len(nbins) == nobs, \"You must supply %d bin values\" % nobs\n", "        self.env = env\n", "        self.nbins = nbins\n", "        self.widths = []\n", "        for low, high, n in zip(env.observation_space.low,\n", "                                env.observation_space.high,\n", "                                nbins):\n", "            self.widths.append((high-low)/n)\n", "\n", "        self.state_map = {}\n", "\n", "    \n", "    def size(self):\n", "        \"\"\"\n", "        Return the size of the state space.\n", "        \"\"\"\n", "        \n", "        return np.prod(self.nbins)\n", "\n", "    \n", "    def discretize(self, obs):\n", "        \"\"\"\n", "        Return the discrete state to which an observation corresponds.\n", "        \"\"\"\n", "        \n", "        state = ''\n", "        for value, low, width in zip(obs, self.env.observation_space.low, self.widths):\n", "            # Separate the bin indices with commas so that multi-digit bin numbers\n", "            # (e.g., with 30 bins) can't produce ambiguous keys.\n", "            state += '%d,' % ((value - low)/width)\n", "        if state not in self.state_map:\n", "            self.state_map[state] = len(self.state_map)\n", "        return self.state_map[state]" ] },
{ "cell_type": "markdown", "id": "0986ccce-9e56-484d-8145-4338c71e541d", "metadata": {}, "source": [ "## Task 3: Experiment with different discretization granularities\n", "\n", "There is nothing to turn in for this task.\n", "\n", "Below is an example of running the Mountain Car environment for a few steps and printing out the observation and state. Note which states are visited when the number of bins is 10. Change it to other values, higher and lower, and see how the granularity of the states changes."
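, "\n", "\n", "As a rough sanity check of the binning arithmetic described above, here is a small sketch of what happens to a single element of an observation, using the Mountain Car position bounds of -1.2 and 0.6 with 10 bins. This is just for illustration (the variable names are made up and it is not part of the assignment):\n", "\n", "```python\n", "low, high, nbins = -1.2, 0.6, 10           # Mountain Car position bounds\n", "width = (high - low) / nbins               # (0.6 - (-1.2)) / 10 = 0.18\n", "position = -0.5                            # an example position value\n", "bin_index = int((position - low) / width)  # (-0.5 + 1.2) / 0.18 = 3.88..., truncated to 3\n", "print(bin_index)                           # prints 3\n", "```"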
] }, { "cell_type": "code", "execution_count": null, "id": "9fb8c83a-0a78-419e-bc79-4c184d0cb79a", "metadata": {}, "outputs": [], "source": [ "disc = Discrete(env, 10) # <--- Change the 10 to other values and explore\n", "\n", "print(\"Number of distinct states = %d\" % disc.size())\n", "\n", "for _ in range(50):\n", " action = env.action_space.sample()\n", " observation, reward, terminated, truncated, info = env.step(action)\n", " state = disc.discretize(observation)\n", " print ('State = %s, Observation = %s' % (state, observation))" ] }, { "cell_type": "markdown", "id": "1e1bb9d1-57f6-4ef9-ae71-49085c0bec9f", "metadata": {}, "source": [ "## Task 4: Finish the Q-learner\n", "\n", "The code that you write for this task will be part of your grade on this assignment.\n", "\n", "Below is a Q-learner class. It has an init() method and a method for choosing the greedy action. You'll need to\n", "* add a method for choosing an epsilon-greedy action\n", "* add a method for performing a Q update\n", "\n", "I've provided stubs for those methods. Recall that the epsilon-greedy action is one that is randomly chosen with probability epsilon and greedy with probability 1 - epsilon." ] }, { "cell_type": "code", "execution_count": null, "id": "d372467e-c9bd-462e-9eb2-c36fadba76a9", "metadata": {}, "outputs": [], "source": [ "class Q:\n", "\n", " def __init__(self, nstates, nactions):\n", " \"\"\"\n", " Arguments:\n", " nstates - The number of distinct states\n", " nactions - The number of distinct actions\n", " \"\"\"\n", " \n", " self.gamma = 0.999 # Discount factor\n", " self.alpha = 0.1 # Learning rate\n", " self.epsilon = 0.05 # Exploration probability\n", "\n", " # Create a Q-table initialized to 0\n", " self.table = np.zeros((nstates, nactions))\n", "\n", " \n", " def greedy_action(self, state):\n", " \"\"\"\n", " Return the greedy action for a state. If multiple actions have the \n", " same highest Q-value, choose one of them at random.\n", "\n", " Arguments:\n", " state - The state in which the action is to be taken\n", "\n", " Returns: The optimal action\n", " \"\"\"\n", " \n", " qmax = self.table[state].max()\n", " greedy = [idx for idx, value in enumerate(self.table[state]) if value == qmax]\n", " return random.choice(greedy)\n", "\n", "\n", " ###\n", " ### You need to write this\n", " ###\n", " def get_action(self, state):\n", " \"\"\"\n", " Choose an action that is epsilon greedy in a given state.\n", "\n", " Arguments:\n", " state - The state in which the action is to be taken\n", "\n", " Returns: The action\n", "\n", " \"\"\"\n", " \n", " pass\n", " \n", "\n", " ###\n", " ### You need to write this\n", " ###\n", " def update(self, state1, action, reward, state2):\n", " \"\"\"\n", " Given that an action was taken in state 1, leading to a specific reward and\n", " a transition to state 2, perform one update on the Q-table.\n", " \"\"\"\n", " \n", " pass " ] }, { "cell_type": "markdown", "id": "8751d562-e04b-47dd-8894-da7eec6af290", "metadata": {}, "source": [ "## Task 5: Watch a domain run\n", "\n", "There is nothing to turn in for this task.\n", "\n", "The function below allows you to run a domain using a Q-table for action selection and see what is going on. Try calling it with each of the domains below to see them in action. Note that the Q-table is initialized to all zeros so the greedy action is random.\n", "\n", "When you run the function you should see a window pop up with a visualization of the domain." 
] }, { "cell_type": "code", "execution_count": null, "id": "4ab9f9dd-e8ab-4f0a-a73b-9fc21d84b2ea", "metadata": {}, "outputs": [], "source": [ "MOUNTAIN_CAR = \"MountainCar-v0\"\n", "ACROBOT = \"Acrobot-v1\"\n", "LUNAR_LANDER = \"LunarLander-v2\"" ] }, { "cell_type": "code", "execution_count": null, "id": "01d54ffa-8629-45af-8633-2d9a186018a3", "metadata": {}, "outputs": [], "source": [ "def run_domain(env, q, disc, steps):\n", " \"\"\"\n", " Arguments:\n", " env - A Gymnasium enviroment that was created with gym.make()\n", " q - A Q instance\n", " disc - A Discrete instance\n", " steps - The number of steps for which to run the domain\n", " \"\"\"\n", " \n", " observation, info = env.reset()\n", "\n", " for _ in tqdm(range(steps)):\n", " state = disc.discretize(observation)\n", " action = q.greedy_action(state) \n", " observation, reward, terminated, truncated, info = env.step(action)\n", " if terminated or truncated:\n", " observation, info = env.reset()" ] }, { "cell_type": "code", "execution_count": null, "id": "c96bfb4a-f27e-4b00-8e01-a98380a62fa2", "metadata": {}, "outputs": [], "source": [ "# Create a domain - NOTE that using render_mode \"human\" visualizes the domain\n", "env = gym.make(MOUNTAIN_CAR, render_mode=\"human\")\n", "\n", "# Create an object to discretize it\n", "disc = Discrete(env, 10)\n", "\n", "# Create a Q-learner\n", "q = Q(disc.size(), env.action_space.n)\n", "\n", "# Run the domain\n", "run_domain(env, q, disc, 500)" ] }, { "cell_type": "markdown", "id": "7420ba13-4a2b-4f2d-b360-908b483bdd25", "metadata": {}, "source": [ "## Task 6: Implement Q-learning and test it on the Mountain Car domain\n", "\n", "Your code for Q-learning will be part of your grade for this homework.\n", "\n", "The Mountain Car domain is the easiest one so you should start there. I've found that using the default parameters in the Q class, nbins = 30, and 500K steps you can learn an optimal policy.\n", "\n", "I've written a stub for the Q-learning function below that you can fill in.\n", "\n", "Things to keep in mind:\n", "* During training you want render_mode = None or it will be very slow\n", "* If a step() in the domain makes terminated or truncated true, that means the episode ended and you need to reset() the domain. You can look at run_domain() above to see how I handle that." ] }, { "cell_type": "code", "execution_count": null, "id": "77ee9259-0be9-447f-a860-ddaf587b56a5", "metadata": {}, "outputs": [], "source": [ "def learn_domain(env, q, disc, steps):\n", " \"\"\"\n", " Arguments:\n", " env - A Gymnasium enviroment that was created with gym.make()\n", " q - A Q instance\n", " disc - A Discrete instance\n", " steps - The number of steps for which to run the domain and perform Q updates\n", " \"\"\"\n", "\n", " pass" ] }, { "cell_type": "markdown", "id": "a9b10890-6786-4ed8-841f-d7c4e1290eed", "metadata": {}, "source": [ "The two cells below, if your Q-learner and training function are correct, will yield optimal behavior in the Mountain Car domain." 
] }, { "cell_type": "code", "execution_count": null, "id": "b7060ac5-12a5-4bc5-a0e2-74e9249a1ed3", "metadata": {}, "outputs": [], "source": [ "# Create a Mountain Car with render_mode = None so that it runs fast\n", "env = gym.make(MOUNTAIN_CAR, render_mode=None)\n", "\n", "# Use 30 bins for discretition of each element of the observation\n", "disc = Discrete(env, 30)\n", "\n", "# Allocate a Q table\n", "q = Q(disc.size(), env.action_space.n)\n", "\n", "# Learn!\n", "learn_domain(env, q, disc, 500000)" ] }, { "cell_type": "code", "execution_count": null, "id": "94e44ea4-5648-4fe9-97ae-09a5a9dcfe47", "metadata": {}, "outputs": [], "source": [ "# Create a version of the domain with render_model = \"human\" so that you can watch it\n", "env = gym.make(MOUNTAIN_CAR, render_mode=\"human\")\n", "\n", "# Observe the policy running\n", "run_domain(env, q, disc, 500)" ] }, { "cell_type": "markdown", "id": "b10fd00c-0456-4e3e-9643-a8be6eae26cc", "metadata": {}, "source": [ "## Task 7: Experiment with all domaims\n", "\n", "You responses here will be part of your grade for this homework\n", "\n", "For each of the three domains:\n", "* Describe the behavior in the domain when the action selection is random (i.e., when the Q-table is first initialized and prior to any training). This can take the form of a few sentences that explain the behavior you are seeing and why random actions would lead to that behavior.\n", "* Experiment with a few (3) different values of nbins when discretizing, some small and some larger, and explain differences in the learned policy as manifest when you run it in \"human\" mode between the different levels of descetization. Again, describe the behavior you are seeing and why the level of discretization may have contributed to it, compared to the other behaviors you see for other levels of discretization.\n", "* For one run in which you got the best learning results plot something that convinces me that learning occured. That could be episode length through time (e.g., the faster the Mountain Car gets to the top of the hill the shorter the episodes) or reward through time (e.g., for the Lunar Lander). Explain how the plot shows evidence that the system is learning." ] }, { "cell_type": "code", "execution_count": null, "id": "e0dc39c0-ae58-4ee0-bf2a-e5e9e5d9c92f", "metadata": {}, "outputs": [], "source": [] } ], "metadata": { "kernelspec": { "display_name": "Python 3 (ipykernel)", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", "mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.11.4" } }, "nbformat": 4, "nbformat_minor": 5 }